Lecture 2: Subgradient Methods

Author

  • John C. Duchi
Abstract

In this lecture, we discuss first-order methods for the minimization of convex functions. We focus almost exclusively on subgradient-based methods, which are essentially universally applicable to convex optimization problems because they rely very little on the structure of the problem being solved. This yields effective but slow algorithms for classical optimization problems; in large-scale problems arising from machine learning and statistical tasks, however, subgradient methods enjoy a number of (theoretical) optimality properties and have excellent practical performance.
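
As a concrete companion to this abstract, here is a minimal sketch (not taken from the lecture itself) of the basic subgradient iteration x_{k+1} = x_k - alpha_k * g_k with g_k in the subdifferential of f at x_k, applied to a toy l1 problem; the step-size schedule alpha_k = 1/sqrt(k+1) and all names are illustrative choices.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, steps=1000):
    """Basic subgradient method with step sizes alpha_k = 1/sqrt(k+1).

    Tracks the best iterate seen, since f(x_k) need not decrease
    monotonically for nonsmooth f."""
    x = x0.astype(float)
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        g = subgrad(x)
        x = x - g / np.sqrt(k + 1)       # x_{k+1} = x_k - alpha_k * g_k
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Toy nonsmooth problem: minimize f(x) = ||x - c||_1.
c = np.array([1.0, -2.0, 3.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)       # a valid subgradient of the l1 norm
x_best, f_best = subgradient_method(f, subgrad, np.zeros(3))
print(x_best, f_best)                    # x_best approaches c, f_best approaches 0
```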

Related articles

Introductory Lectures on Stochastic Optimization

In this set of four lectures, we study the basic analytical tools and algorithms necessary for the solution of stochastic convex optimization problems, as well as for providing various optimality guarantees associated with the methods. As we proceed through the lectures, we will be more exact about the precise problem formulations, providing a number of examples, but roughly, by a stochastic op...

Inexact subgradient methods over nonlinearly constrained networks

The minimization of nonlinearly constrained network flow problems can be performed by using approximate subgradient methods. The idea is to solve this kind of problem by means of primal-dual methods, given that the minimization of nonlinear network flow problems can be efficiently done by exploiting the network structure. In this work it is proposed to solve the dual problem by using ε-subgradie...
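
A minimal sketch of the primal-dual idea described here, under simplifying assumptions: a generic toy problem min 0.5*||x||^2 s.t. A x <= b stands in for the network flow problem, the Lagrangian minimization is closed form rather than exploiting network structure, and a fixed step size replaces the paper's inexact subgradient machinery.

```python
import numpy as np

def dual_subgradient(A, b, steps=500, alpha=0.1):
    """Dual subgradient ascent for: min 0.5*||x||^2  s.t.  A x <= b.

    For each dual iterate lam, the Lagrangian minimizer over x is closed
    form (x = -A.T @ lam); the constraint residual A x - b is a
    (super)gradient of the concave dual function, and lam stays
    nonnegative by projection."""
    lam = np.zeros(A.shape[0])
    for _ in range(steps):
        x = -A.T @ lam                            # argmin_x L(x, lam)
        lam = np.maximum(0.0, lam + alpha * (A @ x - b))
    return -A.T @ lam, lam

A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([-1.0, 0.0])                         # forces x1 + x2 <= -1
x, lam = dual_subgradient(A, b)
print(x, lam, A @ x - b)                          # x -> (-0.5, -0.5), residuals <= 0
```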

Stochastic Subgradient Methods

Stochastic subgradient methods play an important role in machine learning. We introduced the concepts of subgradient methods and stochastic subgradient methods in this project, discussed their convergence conditions as well as the strong and weak points against their competitors. We demonstrated the application of (stochastic) subgradient methods to machine learning with a running example of tr...

A new Levenberg-Marquardt approach based on Conjugate gradient structure for solving absolute value equations

In this paper, we present a new approach for solving the absolute value equation (AVE) which uses the Levenberg-Marquardt method with a conjugate subgradient structure. In conjugate subgradient methods the new direction is obtained by combining the steepest descent direction and the previous direction, which may not lead to good numerical results. Therefore, we replace the steepest descent dir...
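
For orientation, here is a minimal sketch of a plain Levenberg-Marquardt iteration for the AVE A x - |x| = b, using the generalized Jacobian J(x) = A - diag(sign(x)); it omits the conjugate-subgradient direction that is the paper's actual contribution, and all parameter choices and the toy instance are illustrative.

```python
import numpy as np

def lm_ave(A, b, x0=None, mu=1e-2, steps=50, tol=1e-10):
    """Levenberg-Marquardt iteration for the absolute value equation
    A x - |x| = b: residual F(x) = A x - |x| - b, with generalized
    Jacobian J(x) = A - diag(sign(x))."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(steps):
        F = A @ x - np.abs(x) - b
        if np.linalg.norm(F) < tol:
            break
        J = A - np.diag(np.sign(x))
        # Damped Gauss-Newton (LM) step: (J^T J + mu I) d = -J^T F.
        d = np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ F)
        x = x + d
    return x

# Toy instance with a known solution; the singular values of A exceed 1,
# which guarantees that the AVE has a unique solution.
rng = np.random.default_rng(0)
A = 5.0 * np.eye(4) + 0.1 * rng.normal(size=(4, 4))
x_true = rng.normal(size=4)
b = A @ x_true - np.abs(x_true)
print(np.allclose(lm_ave(A, b), x_true, atol=1e-6))  # True
```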

Nonsmooth analysis and subgradient methods for averaging in dynamic time warping spaces

Time series averaging in dynamic time warping (DTW) spaces has been successfully applied to improve pattern recognition systems. This article proposes and analyzes subgradient methods for the problem of finding a sample mean in DTW spaces. The class of subgradient methods generalizes existing sample mean algorithms such as DTW Barycenter Averaging (DBA). We show that DBA is a majorize-minimize ...
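
Since DBA is only named here, a compact sketch may help: a plain DTW alignment plus one DBA-style majorize-minimize update of a univariate sample mean. This follows the general idea only, under our own simplifications; it is not the article's exact algorithm or notation.

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic-programming DTW between 1-d series a and b; returns the
    optimal warping path as a list of index pairs (i, j)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = (a[i-1] - b[j-1])**2 + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    path, (i, j) = [], (n, m)
    while (i, j) != (1, 1):                       # backtrack the cheapest predecessor
        path.append((i - 1, j - 1))
        i, j = min([(i-1, j), (i, j-1), (i-1, j-1)], key=lambda t: D[t])
    path.append((0, 0))
    return path[::-1]

def dba_step(mean, series):
    """One DBA-style (majorize-minimize) update: align each series to the
    current mean, then average the values warped onto each mean index."""
    sums = np.zeros_like(mean)
    counts = np.zeros(len(mean))
    for s in series:
        for i, j in dtw_path(mean, s):
            sums[i] += s[j]
            counts[i] += 1
    return sums / counts                          # every index lies on some path

# Toy data: phase-shifted sine waves; the mean stays a sine-like curve.
series = [np.sin(np.linspace(0, 2 * np.pi, 30) + sh) for sh in (0.0, 0.2, -0.2)]
mean = series[0].copy()
for _ in range(10):
    mean = dba_step(mean, series)
print(np.round(mean[:6], 2))
```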

Publication date: 2016